Humanity in the Age of AI: Reassessing 2025's Existential-Risk Narratives
Two 2025 publications, "AI 2027" (Kokotajlo et al., 2025) and "If Anyone Builds It, Everyone Dies" (Yudkowsky & Soares, 2025), assert that superintelligent artificial intelligence will almost certainly destroy or render humanity obsolete within the next decade. Both rest on the classic chain formulated by Good (1965) and Bostrom (2014): intelligence explosion, superintelligence, lethal misalignment. This article subjects each link to the empirical record of 2023-2025. Sixty years after Good's speculation, none of the required phenomena (sustained recursive self-improvement, autonomous strategic awareness, or intractable lethal misalignment) have been observed. Current generative models remain narrow, statistically trained artefacts: powerful, opaque, and imperfect, but devoid of the properties that would make the catastrophic scenarios plausible. Following Whittaker (2025a, 2025b, 2025c) and Zuboff (2019, 2025), we argue that the existential-risk thesis functions primarily as an ideological distraction from the ongoing consolidation of surveillance capitalism and extreme concentration of computational power. The thesis is further inflated by the 2025 AI speculative bubble, where trillions in investments in rapidly depreciating "digital lettuce" hardware (McWilliams, 2025) mask lagging revenues and jobless growth rather than heralding superintelligence. The thesis remains, in November 2025, a speculative hypothesis amplified by a speculative financial bubble rather than a demonstrated probability.
'The biggest decision yet': Jared Kaplan on allowing AI to train itself
Anthropic's chief scientist says AI autonomy could spark a beneficial "intelligence explosion" - or be the moment humans lose control.

Humanity will have to decide by 2030 whether to take the "ultimate risk" of letting artificial intelligence systems train themselves to become more powerful, one of the world's leading AI scientists has said. Jared Kaplan, the chief scientist and co-owner of the $180bn (£135bn) US startup Anthropic, said a choice was looming about how much autonomy the systems should be given to evolve. The move could trigger a beneficial "intelligence explosion" - or be the moment humans end up losing control. In an interview about the intensely competitive race to reach artificial general intelligence (AGI) - sometimes called superintelligence - Kaplan urged international governments and society to engage in what he called "the biggest decision". Anthropic is part of a pack of frontier AI companies, including OpenAI, Google DeepMind, xAI, Meta and Chinese rivals led by DeepSeek, racing for AI dominance. Its widely used AI assistant, Claude, has become particularly popular among business customers.
Will Humanity Be Rendered Obsolete by AI?
Louadi, Mohamed El, Romdhane, Emna Ben
This article analyzes the existential risks artificial intelligence (AI) poses to humanity, tracing the trajectory from current AI to ultraintelligence. Drawing on the theoretical work of Irving J. Good and Nick Bostrom, plus recent publications (AI 2027; If Anyone Builds It, Everyone Dies), it explores AGI and superintelligence. Considering machines' exponentially growing cognitive power and hypothetical IQs, it addresses the ethical and existential implications of an intelligence vastly exceeding humanity's and fundamentally alien to it. Human extinction may result not from malice, but from uncontrollable, indifferent cognitive superiority.
Preparing for the Intelligence Explosion
MacAskill, William, Moorhouse, Fin
AI that can accelerate research could drive a century of technological progress over just a few years. During such a period, new technological or political developments will raise consequential and hard-to-reverse decisions, in rapid succession. We call these developments grand challenges. These challenges include new weapons of mass destruction, AI-enabled autocracies, races to grab offworld resources, and digital beings worthy of moral consideration, as well as opportunities to dramatically improve quality of life and collective decision-making. We argue that these challenges cannot always be delegated to future AI systems, and suggest things we can do today to meaningfully improve our prospects. AGI preparedness is therefore not just about ensuring that advanced AI systems are aligned: we should be preparing, now, for the disorienting range of developments an intelligence explosion would bring.
Two Paths for A.I.
Last spring, Daniel Kokotajlo, an A.I.-safety researcher working at OpenAI, quit his job in protest. He'd become convinced that the company wasn't prepared for the future of its own technology, and wanted to sound the alarm. After a mutual friend connected us, we spoke on the phone. I found Kokotajlo affable, informed, and anxious. Advances in "alignment," he told me--the suite of techniques used to insure that A.I. acts in accordance with human commands and values--were lagging behind gains in intelligence.
Dynamic Normativity: Necessary and Sufficient Conditions for Value Alignment
The critical inquiry pervading the realm of Philosophy, and perhaps extending its influence across all Humanities disciplines, revolves around the intricacies of morality and normativity. Surprisingly, in recent years, this thematic thread has woven its way into an unexpected domain, one not conventionally associated with pondering "what ought to be": the field of artificial intelligence (AI) research. Central to morality and AI, we find "alignment", a problem related to the challenges of expressing human goals and values in a manner that artificial systems can follow without leading to unwanted adversarial effects. More explicitly and with our current paradigm of AI development in mind, we can think of alignment as teaching human values to non-anthropomorphic entities trained through opaque, gradient-based learning techniques. This work addresses alignment as a technical-philosophical problem that requires solid philosophical foundations and practical implementations that bring normative theory to AI system development. To accomplish this, we propose two sets of necessary and sufficient conditions that, we argue, should be considered in any alignment process. While necessary conditions serve as metaphysical and metaethical roots that pertain to the permissibility of alignment, sufficient conditions establish a blueprint for aligning AI systems under a learning-based paradigm. After laying such foundations, we present implementations of this approach by using state-of-the-art techniques and methods for aligning general-purpose language systems. We call this framework Dynamic Normativity. Its central thesis is that any alignment process under a learning paradigm that cannot fulfill its necessary and sufficient conditions will fail in producing aligned systems.
An AI Pause Is Humanity's Best Bet For Preventing Extinction
The existential risks posed by artificial intelligence (AI) are now widely recognized. After hundreds of industry and science leaders warned that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the U.N. Secretary-General recently echoed their concern. So did the prime minister of the U.K., who is also investing 100 million pounds into AI safety research that is mostly meant to prevent existential risk. Other leaders are likely to follow in recognizing AI's ultimate threat. In the scientific field of existential risk, which studies the most likely causes of human extinction, AI is consistently ranked at the top of the list.
The 'Don't Look Up' Thinking That Could Doom Us With AI
Many companies are working to build AGI (artificial general intelligence), defined as "AI that can learn and perform most intellectual tasks that human beings can, including AI development." Below we'll discuss why this may rapidly lead to superintelligence, defined as "general intelligence far beyond human level". I'm often told that AGI and superintelligence won't happen because it's impossible: human-level intelligence is something mysterious that can only exist in brains. Such carbon chauvinism ignores a core insight from the AI revolution: that intelligence is all about information processing, and it doesn't matter whether the information is processed by carbon atoms in brains or by silicon atoms in computers. AI has been relentlessly overtaking humans on task after task, and I invite carbon chauvinists to stop moving the goalposts and publicly predict which tasks AI will never be able to do.
The Best Books on Artificial Intelligence
I've read a couple of your books now, and what I want to know is this: Do you really think that artificial intelligence is a threat to the human race and could lead to our extinction?

Yes, I do, but it also has the potential for enormous benefit. I do think it's probably going to be either very, very good for us or very, very bad. It's a bit like a strange attractor in chaos theory; the outcomes in the middle seem less likely. I'm reasonably hopeful because what will determine whether it's very good or very bad is largely us. We have time, certainly before artificial general intelligence (AGI) arrives. AGI is an artificial intelligence (AI) that has human-level cognitive ability, so can outperform us, or at least equal us, in every area of cognitive ability that we have. It also has volition and may be conscious, although that's not necessary. We have time before that arrives: we have time to make sure it's safe.

At the same time as having scary potential, AI also brings the possibility of immortality and living forever by uploading your brain. Is that something you think will happen at some point?

I certainly hope it will. Things like immortality, the complete end of poverty, the abolition of suffering, are all part of the very, very good outcome, if we get it right. If you have a superintelligence that is many, many times smarter than the smartest human, it could solve many of our problems. Problems like ageing and how to upload a mind into a computer do seem, in principle, solvable. So yes, I do think they are realistic.
The basics of artificial intelligence - Dataconomy
Today, we look at the basics of artificial intelligence, which permeates almost every aspect of our lives. This article will explore the main concepts revolving around artificial intelligence and answer frequently asked questions while avoiding technical complexities as much as possible. Artificial intelligence (AI) is a field of computer science that focuses on developing smart machines capable of accomplishing tasks that require human intellect. Most people immediately think of Artificial General Intelligence (AGI) when they hear about AI: a system that can perform anything a human being can, only far better. The fact is, however, that we are nowhere near creating one.